Statistical Relational Learning: An Inductive Logic Programming Perspective

Author

  • Luc De Raedt
Abstract

In the past few years there has been a lot of work lying at the intersection of probability theory, logic programming and machine learning [14, 18, 13, 9, 6, 1, 11]. This work is known under the names of statistical relational learning [7, 5], probabilistic logic learning [4], or probabilistic inductive logic programming. Whereas most existing work has started from a probabilistic learning perspective and extended probabilistic formalisms with relational aspects, I shall take a different perspective, in which I start from inductive logic programming and study how inductive logic programming formalisms, settings and techniques can be extended to deal with probabilistic issues. This tradition has already contributed a rich variety of valuable formalisms and techniques, including probabilistic Horn abduction by David Poole, PRISM by Sato, stochastic logic programs by Muggleton [13] and Cussens [2], Bayesian logic programs [10, 8] by Kersting and De Raedt, and Logical Hidden Markov Models [11]. The main contribution of this talk is the introduction of three probabilistic inductive logic programming settings, which are derived from the learning from entailment, learning from interpretations, and learning from proofs settings of the field of inductive logic programming [3]. Each of these settings contributes different notions of probabilistic logic representations, examples and probability distributions. The first setting, probabilistic learning from entailment, is incorporated in the well-known PRISM system [19] and in Cussens's Failure Adjusted Maximisation approach to parameter estimation in stochastic logic programs [2]. A novel system that was recently developed and that fits this paradigm is the nFOIL system [12]. It combines key principles of the well-known inductive logic programming system FOIL [15] with the naïve Bayes approach.
In probabilistic learning from entailment, examples are ground facts that should be probabilistically entailed by the target logic program. The second setting, probabilistic learning from interpretations, is incorporated in Bayesian logic programs [10, 8], which integrate Bayesian networks with logic programs. This setting is also adopted by [6]. Examples in this setting are Herbrand interpretations that should be a probabilistic model for the target theory. The third setting, learning from proofs [17], is novel. It is motivated by the learning of stochastic context-free grammars from tree banks. In this setting, examples are proof trees that should be probabilistically provable from the unknown stochastic logic program. The sketched settings (and the instances presented) are by no means the only possible settings for probabilistic inductive logic programming, but …
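The first setting, probabilistic entailment of ground facts, can be illustrated with a minimal sketch of the distribution semantics underlying PRISM-style programs: a query's probability is the total probability of the possible worlds (truth assignments to probabilistic facts) in which the definite clauses entail it. The program below is a toy assumption, not taken from the talk; the fact and predicate names (`burglary`, `earthquake`, `alarm`) are illustrative only.

```python
from itertools import product

# Toy probabilistic logic program (illustrative assumption):
# probabilistic ground facts with independent probabilities,
# plus ordinary definite clauses (head, body).
prob_facts = {"burglary": 0.1, "earthquake": 0.2}
clauses = [("alarm", ["burglary"]),
           ("alarm", ["earthquake"])]

def entailed(query, true_facts):
    """Forward-chain the definite clauses from a set of true facts."""
    known = set(true_facts)
    changed = True
    while changed:
        changed = False
        for head, body in clauses:
            if head not in known and all(b in known for b in body):
                known.add(head)
                changed = True
    return query in known

def prob_entailed(query):
    """Sum the probabilities of all possible worlds entailing the query."""
    facts = list(prob_facts)
    total = 0.0
    for choices in product([True, False], repeat=len(facts)):
        p, world = 1.0, set()
        for f, chosen in zip(facts, choices):
            p *= prob_facts[f] if chosen else 1.0 - prob_facts[f]
            if chosen:
                world.add(f)
        if entailed(query, world):
            total += p
    return total

print(prob_entailed("alarm"))  # 1 - (1-0.1)*(1-0.2) = 0.28
```

Real systems such as PRISM avoid this exponential enumeration of worlds by exploiting the structure of proofs, but the semantics computed is the same: a probability distribution over entailed ground facts.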


Similar articles

The Variable Precision Rough Set Inductive Logic Programming Model — a Statistical Relational Learning Perspective

The Variable Precision Rough Set Inductive Logic Programming model (VPRSILP model) extends the Variable Precision Rough Set (VPRS) model to Inductive Logic Programming (ILP). The VPRSILP model is considered from the Statistical Relational Learning perspective, by comparing and contrasting it with Stochastic Logic Programs.


Learning Constraint Satisfaction Problems: An ILP Perspective

We investigate the problem of learning constraint satisfaction problems from an inductive logic programming perspective. Constraint satisfaction problems are the underlying basis for constraint programming and there is a long standing interest in techniques for learning these. Constraint satisfaction problems are often described using a relational logic, so inductive logic programming is a natu...


An Inductive Logic Programming Approach to Statistical Relational Learning



Logic, Probability and Learning, or an Introduction to Statistical Relational Learning

Probabilistic inductive logic programming (PILP), sometimes also called statistical relational learning, addresses one of the central questions of artificial intelligence: the integration of probabilistic reasoning with first order logic representations and machine learning. A rich variety of different formalisms and learning techniques have been developed and they are being applied on applicat...


Inductive Logic Programming meets Relational Databases: An Application to Statistical Relational Learning

With the increasing amount of relational data, scalable approaches to faithfully model this data have become increasingly important. Statistical Relational Learning (SRL) approaches have been developed to learn in presence of noisy relational data by combining probability theory with first order logic. However most learning approaches for these models do not scale well to large datasets. While ...


An integrated development environment for probabilistic relational reasoning

This paper presents KReator, a versatile integrated development environment for probabilistic inductive logic programming currently under development. The area of probabilistic inductive logic programming (or statistical relational learning) aims at applying probabilistic methods of inference and learning in relational or first-order representations of knowledge. In the past ten years the commu...



Journal:

Volume   Issue 

Pages  -

Publication date: 2005